Convergence of approximated gradient method for Elman networks

Authors

  • Dongpo Xu
  • Zhengxue Li
  • Wei Wu
Abstract

An approximated gradient method for training Elman networks is considered. For a finite sample set, the error function is proved to be monotone during the training process, and the approximated gradient of the error function tends to zero if the weight sequence is bounded. Furthermore, under a moderate additional condition, the weight sequence itself is also proved to be convergent. A numerical example is given to support the theoretical findings.
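The abstract's "approximated gradient" can be illustrated with a minimal sketch: an Elman network whose context units (the previous hidden state) are treated as constants when differentiating, so the gradient ignores the dependence of the stored state on the weights. The architecture sizes, the sine-wave prediction task, and the learning rate below are illustrative assumptions, not the paper's actual experimental setup; the online updates here also need not be exactly monotone, unlike the batch setting analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 1, 5, 1  # assumed toy dimensions

W = rng.normal(0.0, 0.5, (n_hid, n_in))   # input -> hidden weights
U = rng.normal(0.0, 0.5, (n_hid, n_hid))  # context -> hidden weights
V = rng.normal(0.0, 0.5, (n_out, n_hid))  # hidden -> output weights

# Assumed toy task: predict the next value of a sampled sine wave.
xs = np.sin(np.linspace(0.0, 2.0 * np.pi, 21))
inputs, targets = xs[:-1], xs[1:]

eta = 0.05
errors = []
for epoch in range(300):
    h = np.zeros(n_hid)  # reset context at the start of each pass
    E = 0.0
    for x, d in zip(inputs, targets):
        x = np.array([x])
        h_prev = h                      # context units, held fixed below
        h = np.tanh(W @ x + U @ h_prev)
        y = V @ h
        e = y - d
        E += 0.5 * float(e @ e)

        # Approximated gradient: differentiate only through the current
        # step, treating h_prev as a constant (no backprop through time).
        delta_h = (V.T @ e) * (1.0 - h**2)
        grad_V = np.outer(e, h)
        grad_W = np.outer(delta_h, x)
        grad_U = np.outer(delta_h, h_prev)
        V -= eta * grad_V
        W -= eta * grad_W
        U -= eta * grad_U
    errors.append(E)

print(errors[0], errors[-1])  # error should decrease over training
```

Under the paper's conditions (batch updates, bounded weights), this error sequence is the quantity shown to decrease monotonically, with the approximated gradient tending to zero.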


Similar articles

A conjugate gradient based method for Decision Neural Network training

Decision Neural Network is a new approach for solving multi-objective decision-making problems based on artificial neural networks. By using imprecise evaluation data, network training is improved and the number of required training data sets is reduced. The available training method is based on the gradient descent method (BP). One of its limitations is its convergence speed. Therefore,...


Traffic Signal Prediction Using Elman Neural Network and Particle Swarm Optimization

Prediction of traffic is crucial for its management. Because of the human involvement in generating this phenomenon, the traffic signal is normally accompanied by noise and high levels of non-stationarity. Traffic signal prediction has therefore attracted researchers' interest as an important subject of study. In this study, a combinatorial approach is proposed for traffic signal...


CSLMEN: A New Optimized Method for Training Levenberg Marquardt Elman Network Based Cuckoo Search Algorithm

RNNs have local feedback loops within the network, which allow them to store earlier accessible patterns. Such networks can be trained with gradient descent back-propagation, and optimization techniques such as second-order methods (conjugate gradient, quasi-Newton, Levenberg-Marquardt) have also been used for network training [14, 15]. Still, these algorithms are not guaranteed to find the global m...


Weight Optimization in Recurrent Neural Networks with Hybrid Metaheuristic Cuckoo Search Techniques for Data Classification

Recurrent neural networks (RNNs) have been widely used as a tool for data classification. Such networks can be trained with gradient descent back-propagation. However, traditional training algorithms have drawbacks such as slow convergence and no guarantee of finding the global minimum of the error function, since gradient descent may get stuck in local minima. As a solution, nature...


A Novel Learning Method for Elman Neural Network Using Local Search

Elman Neural Networks (ENNs) have been efficient identification tools in many areas since they have dynamic memories. However, the local minima problem usually occurs during learning because of the employed back-propagation algorithm. In this paper, we propose a novel learning method for ENNs by introducing an adaptive learning parameter into the traditional local search algorithm. Th...



Journal:

Volume   Issue

Pages  -

Publication date: 2008